Tuesday, November 27, 2018

Lambda Post Exploitation – Devil in the Permission

In the last blog post, we covered an exploit scenario in which an attacker, after identifying a vulnerability in a lambda function, could gain access to all the resources in the AWS account. A weak implementation of permission controls led to this exploit scenario. We concluded the following: -

“The permission controls implemented for lambda functions may not seem critical while executing the lambda function stand-alone or by looking at the features or from a developer standpoint but it might turn out to be catastrophic for the overall AWS infrastructure because of a combination of vulnerabilities.  It would not only lead to a compromise of the lambda function or backend details but would impact the overall application residing on that particular account. Who knows? – An attacker may end up getting access to secret buckets, sensitive DynamoDB tables, EC2 backups and files etc. Thus, permission controls should be well implemented across the complete infrastructure and not stand-alone lambda functions. We will cover more on permission structures and issues in the coming blog posts.”

All AWS resources (including lambda functions) are owned by an AWS account, and access to these resources is governed by permission policies. It is necessary to understand these permission models and grant appropriate permissions to every resource. A detailed explanation of the AWS Lambda permission model is available here: - https://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html#lambda-intro-execution-role.
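For illustration, each statement in a policy document combines an "Effect", an "Action" and a "Resource", and wildcards in the latter two behave like glob patterns. Below is a minimal sketch of how a single Allow statement is matched against a requested action and resource - a simplified model for intuition only, not the full IAM evaluation logic:

```python
# Simplified, illustrative model of matching one IAM Allow statement.
# The statement shape matches real policy documents, but real IAM
# evaluation is far more involved (Deny statements, conditions, etc.).
import fnmatch

def statement_allows(statement, action, resource):
    """Return True if a single Allow statement covers the given action/resource."""
    if statement.get("Effect") != "Allow":
        return False
    actions = statement["Action"]
    if isinstance(actions, str):
        actions = [actions]
    resources = statement["Resource"]
    if isinstance(resources, str):
        resources = [resources]
    # IAM wildcards act like glob patterns: "s3:*" covers every S3 action.
    action_ok = any(fnmatch.fnmatchcase(action, a) for a in actions)
    resource_ok = any(fnmatch.fnmatchcase(resource, r) for r in resources)
    return action_ok and resource_ok

# A scoped statement only covers the bucket it names ("invoice-bucket"
# is a hypothetical name used for this sketch):
scoped = {"Effect": "Allow", "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::invoice-bucket/*"}
print(statement_allows(scoped, "s3:GetObject", "arn:aws:s3:::invoice-bucket/report.pdf"))  # True
print(statement_allows(scoped, "s3:GetObject", "arn:aws:s3:::secret-bucket/creds.txt"))    # False
```

Note how a statement with `"Action": "s3:*", "Resource": "*"` would return True for any S3 action on any bucket - which is exactly the problem we look at below.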

In our case, the following diagram shows the implementation of the lambda function: -

Let's look at the permissions for the target function. We can quickly run our enumeration utility and extract important information about the function, as below: -

bliss$ python3 enumLambda.py -f processInvoice

==============================================================
enumLambda - Lambda Function Enumeration Script (beta)
(c) Blueinfy solutions pvt. ltd.
==============================================================

(+) Fetching Lambda function processInvoice...
       (+) Platform: python3.6
       (+) Permission: arn:aws:iam::313588302550:role/lambda+s3+dynamo
       (+) Permission Policy: lambda+s3+dynamo
        ==> [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]
        ==> [{"Effect": "Allow", "Action": ["dynamodb:DescribeStream", "dynamodb:GetRecords", "dynamodb:GetShardIterator", "dynamodb:ListStreams", "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"], "Resource": "*"}]
        ==> [{"Effect": "Allow", "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"], "Resource": "*"}]
       (+) xRay-Tracing: PassThrough
       (+) Code-Location: https://awslambda-us-east-2-tasks.s3.us-east-2.amazonaws.com/snapshots/31-----550/processInvoice-f97f48d6-8168---de-9587-a23ceb11c1a4?versionId=a4R0s_z9CyaEodMKd3gD5341&X-Amz-Security-Token=FQoGZXIvYXdzEJv%2F%2F%2F%....
---Function Mapping---
[{'UUID': '7dbec14c-64ce-00e5-92ae-465057ccb435', 'BatchSize': 10, 'EventSourceArn': 'arn:aws:sqs:us-east-2:313588302550:processInvoice', 'FunctionArn': 'arn:aws:lambda:us-east-2:313588302550:function:processInvoice', 'LastModified': datetime.datetime(2018, 8, 14, 11, 7, 44, 705000, tzinfo=tzlocal()), 'State': 'Enabled', 'StateTransitionReason': 'USER_INITIATED'}]
---Function Policy---
    (-)Error: An error occurred (ResourceNotFoundException) when calling the GetPolicy operation: The resource you requested does not exist.
bliss$
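The fields in the output above can be reproduced with a handful of boto3 calls. Here is a minimal sketch of those enumeration steps (the processInvoice name comes from this post; running it requires valid AWS credentials, so boto3 is imported lazily inside the function):

```python
# Minimal sketch of the enumeration steps behind output like the above.
# Only inline role policies are shown here; a fuller tool would also walk
# attached managed policies.

def role_name_from_arn(role_arn):
    """arn:aws:iam::313588302550:role/lambda+s3+dynamo -> lambda+s3+dynamo"""
    return role_arn.split("/", 1)[1]

def enum_lambda(function_name):
    import boto3  # lazy import: only needed when the sketch is actually run
    lam = boto3.client("lambda")
    iam = boto3.client("iam")

    fn = lam.get_function(FunctionName=function_name)
    cfg = fn["Configuration"]
    print("(+) Platform:", cfg["Runtime"])
    print("(+) Permission:", cfg["Role"])

    # Inline policies attached to the execution role
    role = role_name_from_arn(cfg["Role"])
    for name in iam.list_role_policies(RoleName=role)["PolicyNames"]:
        doc = iam.get_role_policy(RoleName=role, PolicyName=name)["PolicyDocument"]
        print("   ==>", doc["Statement"])

    # Pre-signed URL for the deployment package
    print("(+) Code-Location:", fn["Code"]["Location"])

    # Event source mappings (e.g. the SQS queue feeding the function)
    print(lam.list_event_source_mappings(FunctionName=function_name)["EventSourceMappings"])

# enum_lambda("processInvoice")
```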


We are interested in the permissions assigned to the execution role of the lambda function. They are as below: -

 (+) Permission Policy: lambda+s3+dynamo
        ==> [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]
        ==> [{"Effect": "Allow", "Action": ["dynamodb:DescribeStream", "dynamodb:GetRecords", "dynamodb:GetShardIterator", "dynamodb:ListStreams", "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"], "Resource": "*"}]
        ==> [{"Effect": "Allow", "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"], "Resource": "*"}]


Here, S3 is completely accessible in terms of both "Action" and "Resource": -

{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}


The wildcard "*" implies full access, which means we can read the contents of all S3 buckets within the account. In the last post, we demonstrated the same thing with a script that enumerated bucket contents across the account.
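Such access is easy to demonstrate. The following is a minimal sketch of a bucket-walking script (a hypothetical reconstruction, not the exact script from the last post; it requires credentials carrying the over-permissive role, so boto3 is imported lazily):

```python
# With s3:* allowed on Resource "*", any code running under this role can
# walk every bucket in the account. Illustrative sketch only.
def dump_all_buckets(max_keys=5):
    import boto3  # lazy import: only needed when the sketch is actually run
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        print("Bucket:", name)
        # List the first few objects in each bucket
        resp = s3.list_objects_v2(Bucket=name, MaxKeys=max_keys)
        for obj in resp.get("Contents", []):
            print("   ", obj["Key"], obj["Size"])

# dump_all_buckets()
```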

The same principle applies to the DynamoDB permissions: although the actions are limited to stream reads, the "Resource" is again a wildcard, so those actions are allowed against every table and stream in the account.

Permission issues like these can also be identified through the scanLambda.py script from the lambdaScanner toolkit.
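The toolkit's implementation is not reproduced here, but a simplified check along the same lines - flagging Allow statements whose "Action" or "Resource" contains a wildcard - can look like this:

```python
# Simplified permission-issue check: flag Allow statements that use a
# wildcard in Action or Resource. (Illustrative only -- not the actual
# scanLambda.py implementation.)
def find_wildcard_issues(statements):
    issues = []
    for st in statements:
        if st.get("Effect") != "Allow":
            continue
        actions = st["Action"] if isinstance(st["Action"], list) else [st["Action"]]
        resources = st["Resource"] if isinstance(st["Resource"], list) else [st["Resource"]]
        if any("*" in a for a in actions):
            issues.append(("wildcard-action", st))
        if "*" in resources:
            issues.append(("wildcard-resource", st))
    return issues

# Two of the processInvoice statements from the output above:
statements = [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    {"Effect": "Allow",
     "Action": ["dynamodb:DescribeStream", "dynamodb:GetRecords",
                "dynamodb:GetShardIterator", "dynamodb:ListStreams"],
     "Resource": "*"},
]
for kind, st in find_wildcard_issues(statements):
    print(kind, "->", st["Action"])
```

Against the statements above, this flags the S3 block twice (wildcard action and wildcard resource) and the DynamoDB block once (wildcard resource).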

Conclusion:

The implementation and assignment of permission controls should be decided properly before deploying lambda functions. Permissions should be limited to the use case – if a lambda function needs read-only access to a single S3 bucket, then both its "Action" and "Resource" should be defined accordingly. Assigning "*" grants complete access to resources – a critical exploit scenario that can lead to the hijacking of every resource in the account. To avoid or limit this damage, it is imperative to do proper permission modelling of the resources. That said, the first line of defence is to not have any vulnerability in the lambda function that enables such exploit scenarios in the first place; for that, it is necessary to validate input before the event stream is consumed by the lambda function.
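As an illustration of the remediation described above, a statement scoped to read-only access on a single bucket ("invoice-bucket" is a hypothetical name) looks like this:

```python
# A least-privilege statement for the use case described above: read-only
# access to one bucket. ("invoice-bucket" is a hypothetical name.)
least_privilege = {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::invoice-bucket",
                 "arn:aws:s3:::invoice-bucket/*"],
}

# No wildcard actions, and every resource is pinned to the one bucket:
assert all("*" not in a for a in least_privilege["Action"])
assert all(r.startswith("arn:aws:s3:::invoice-bucket")
           for r in least_privilege["Resource"])
```

The object-level wildcard (`invoice-bucket/*`) is still present, but it is confined to one bucket's keys rather than spanning every resource in the account.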

Article by Amish Shah & Shreeraj Shah