Take automated actions against threats and vulnerabilities.
Update: This repository is no longer maintained.
Take automated actions on your Security Command Center findings; you're in control:
Function Name | Service | Description |
---|---|---|
CloseBucket | GCS | Removes public access for a GCS bucket |
CloseCloudSQL | Cloud SQL | Removes public access for a Cloud SQL instance |
ClosePublicDataset | BigQuery | Removes public access for a BigQuery Dataset |
CloudSQLRequireSSL | Cloud SQL | Configures a Cloud SQL instance to require encryption in transit |
DisableDashboard | Google Kubernetes Engine | Disables the GKE dashboard |
EnableAuditLogs | IAM | Enables Data Access logs |
EnableBucketOnlyPolicy | IAM | Enables Uniform Bucket Access on the bucket in question |
IAMRevoke | IAM | Revokes IAM permissions granted by an anomalous grant |
OpenFirewall | Compute Engine | Closes a firewall rule that has 0.0.0.0/0 ingress open |
RemovePublicIP | Compute Engine | Removes external IP from a GCE instance |
SnapshotDisk | Compute Engine | Creates a disk snapshot in response to a C2 finding |
UpdatePassword | Cloud SQL | Updates the Cloud SQL root password |
NOTE: Filters are only supported when using SCC Notifications.
Sometimes a finding is a false positive because the behavior it describes is expected in your environment. In this case, the Filter Cloud Function can automatically mark such findings as false positives in SCC and set them to INACTIVE so you don't have to alert on them. Filters are written in Rego, a policy language from the good folks at Open Policy Agent that is used across other Google Cloud open source projects.
To add your own Rego files, simply add them to ./config/filters. The Cloud Function will pick up any files with the .rego extension except *_test.rego files, so please also add tests. Each file must contain a single rule that evaluates to true if the finding should be filtered. For example, suppose that in a particular low-risk project we want to filter out Bad IP findings that look like valid NTP requests, since they often are. The Rego would look like this:
# filename: ntpd.rego
package sra.filter

ntpd {
    ipcon := input.finding.sourceProperties.properties.ipConnection
    ipcon.destPort == 123
    ipcon.protocol == 17
}
A few notes on the syntax: the package must be sra.filter, and the rule name must match the filename (without the extension), because the Cloud Function queries data.sra.filter.<filename-without-extension>. The finding is available at input.finding, along with all the nested fields beneath it. OPA gives you the ability to test your Rego policies against actual JSON. To do this, simply add the Notification JSON structure into the test and make assertions against it. We give an example of this in ./config/filters/false_positive_test.rego. You can run the tests yourself after you download OPA:
cp config/filters/false_positive.rego.sample config/filters/false_positive.rego
cp config/filters/false_positive_test.rego.sample config/filters/false_positive_test.rego
opa test config/filters
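As an illustration, a companion test for the ntpd.rego example above might look like the following sketch, which constructs only the fields the rule inspects (the filename and test name here are hypothetical):

```rego
# filename: ntpd_test.rego (hypothetical companion to ntpd.rego above)
package sra.filter

test_ntpd_matches_ntp_connection {
    ntpd with input as {"finding": {"sourceProperties": {"properties": {"ipConnection": {"destPort": 123, "protocol": 17}}}}}
}
```

Running opa test config/filters would then evaluate this test alongside the rule it exercises.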
Before installation we'll configure our automations. Copy ./config/sra.yaml.sample to ./config/sra.yaml. You can also view a mostly filled-out sample configuration file. Within this file we'll define a few steps to get started:
Every automation has a configuration similar to the following example:
apiVersion: security-response-automation.cloud.google.com/v1alpha1
kind: Remediation
metadata:
  name: router
spec:
  parameters:
    etd:
      anomalous_iam:
        - action: iam_revoke
          target:
            - organizations/1234567891011/folders/424242424242/*
            - organizations/1234567891011/projects/applied-project
          excludes:
            - organizations/1234567891011/folders/424242424242/projects/non-applied-project
            - organizations/1234567891011/folders/424242424242/folders/565656565656/*
          properties:
            dry_run: true
            anomalous_iam:
              allow_domains:
                - foo.com
The first parameter represents the finding provider, sha (Security Health Analytics) or etd (Event Threat Detection).
Each provider lists findings, which in turn contain a list of automations to be applied to those findings. In this example we apply the iam_revoke automation to Event Threat Detection's Anomalous IAM Grant finding. For a full list of automations and their supported findings see automations.md.
The target and excludes arrays accept an ancestry pattern that is compared against the incoming project. Both the target and excludes patterns are considered; however, excludes takes precedence. The ancestry pattern lets you specify granularity at the organization, folder, and project level.
Pattern | Description |
---|---|
organizations/123 | All projects under the organization 123 |
organizations/123/folders/456/ | Any project in folder 456 in organization 123 |
organizations/123/folders/456/projects/789 | The project 789 in folder 456 in organization 123 |
organizations/123/projects/789 | The project 789 in organization 123 that is not within a folder |
organizations/123//projects/789 | The project 789 in organization 123, whether or not it is in a folder |
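To make the precedence and pattern semantics concrete, here is a minimal Python sketch of how such ancestry matching could work. This is an illustration only, under the semantics described in the table above; the actual matching is implemented in this repository's Go code, and the helper names here are hypothetical:

```python
import re

def pattern_to_regex(pattern: str) -> str:
    """Translate an ancestry pattern into an anchored regex (sketch).

    Semantics assumed from the table above:
      - a bare prefix (organizations/123) matches everything beneath it
      - '*' matches a single path segment
      - '//' matches any number of intermediate folders
    """
    if pattern.endswith("/") and not pattern.endswith("//"):
        pattern = pattern[:-1]                        # trailing slash is optional
    out = re.escape(pattern)                          # '*' becomes '\*'
    out = out.replace("//", "/(?:folders/[^/]+/)*")   # any folder depth
    out = out.replace(r"\*", "[^/]+")                 # single-segment wildcard
    return out + "(/.*)?$"                            # also match descendants

def applies(targets, excludes, ancestry):
    """Return True if an automation applies to the given resource ancestry."""
    def match(p):
        return re.match(pattern_to_regex(p), ancestry) is not None
    if any(match(p) for p in excludes):               # excludes take precedence
        return False
    return any(match(p) for p in targets)
```

For example, a project targeted at the organization level but listed under an excluded folder would be skipped, because the exclude pattern is checked first.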
All automations have a dry_run property that lets you see which actions would have been taken without making any changes. Running in dry-run mode first is recommended to confirm the actions taken are as expected. Once you have confirmed this by viewing logs in Cloud Logging, you can change this property to false and redeploy the automations.
The allow_domains property is specific to the iam_revoke automation. To see examples of how to configure the other automations, see the full documentation.
The service account is configured separately within main.tf. Here we inform Terraform which folders we’re enforcing so the required roles are automatically granted. You have a few choices for how to configure this step:
Following these instructions will deploy all automations. Before you get started, be sure you have the required tools installed (see the development section below), then run:
gcloud auth login --update-adc
terraform init
terraform apply
If you don't want to install all automations, you can install certain automations individually by running terraform apply --target module.revoke_iam_grants. The module name for each automation is found in main.tf. Note that module.filter and module.router are required and must always be installed.
TIP: Instead of entering variables every time, you can create a terraform.tfvars file and input key-value pairs there, e.g. automation-project="aerial-jigsaw-235219".
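For example, a terraform.tfvars might look like the following sketch. The project ID comes from the tip above; the organization and folder IDs are placeholders, and the variable names match the variables table below:

```hcl
automation-project = "aerial-jigsaw-235219"
organization-id    = "1234567891011"
folder-ids         = ["424242424242"]
```

Terraform reads this file automatically on apply, so you won't be prompted for these values.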
If at any point you want to revert the changes we've made, just run terraform destroy.
Terraform will create or destroy everything by default. To redeploy a single Cloud Function you can do:
terraform apply --target module.revoke_iam_grants
Name | Description | Type | Default | Required |
---|---|---|---|---|
automation-project | Project ID where the Cloud Functions should be installed. | string | n/a | yes |
enable-scc-notification | If true, create the notification config from SCC instead of Cloud Logging. | bool | true | no |
findings-project | (Unused if enable-scc-notification is true) Project ID where Event Threat Detection security findings are sent to by Security Command Center. Configured in the Google Cloud Console under Security > Threat Detection. | string | "" | no |
folder-ids | Folder IDs on which to grant permission. | list(string) | n/a | yes |
organization-id | Organization ID. | string | n/a | yes |
Each Cloud Function logs its actions to the log location below. This can be accessed by visiting Cloud Logging, clicking the arrow on the right-hand side, and choosing 'Convert to advanced filter'. Then paste in the filter below, making sure to change the project ID to the project where your Cloud Functions are installed.
Function | Filter |
---|---|
Filter | resource.type = "cloud_function" AND resource.labels.function_name = "Filter" |
Router | resource.type = "cloud_function" AND resource.labels.function_name = "Router" |
CloseBucket | resource.type = "cloud_function" AND resource.labels.function_name = "CloseBucket" |
CloseCloudSQL | resource.type = "cloud_function" AND resource.labels.function_name = "CloseCloudSQL" |
ClosePublicDataset | resource.type = "cloud_function" AND resource.labels.function_name = "ClosePublicDataset" |
CloudSQLRequireSSL | resource.type = "cloud_function" AND resource.labels.function_name = "CloudSQLRequireSSL" |
DisableDashboard | resource.type = "cloud_function" AND resource.labels.function_name = "DisableDashboard" |
EnableAuditLogs | resource.type = "cloud_function" AND resource.labels.function_name = "EnableAuditLogs" |
EnableBucketOnlyPolicy | resource.type = "cloud_function" AND resource.labels.function_name = "EnableBucketOnlyPolicy" |
IAMRevoke | resource.type = "cloud_function" AND resource.labels.function_name = "IAMRevoke" |
OpenFirewall | resource.type = "cloud_function" AND resource.labels.function_name = "OpenFirewall" |
RemovePublicIP | resource.type = "cloud_function" AND resource.labels.function_name = "RemovePublicIP" |
SnapshotDisk | resource.type = "cloud_function" AND resource.labels.function_name = "SnapshotDisk" |
UpdatePassword | resource.type = "cloud_function" AND resource.labels.function_name = "UpdatePassword" |
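The same filters can also be used from the command line. As a sketch, reading the most recent IAMRevoke entries with the gcloud CLI might look like this (the project ID is a placeholder for your automation project):

```shell
gcloud logging read \
  'resource.type="cloud_function" AND resource.labels.function_name="IAMRevoke"' \
  --project=aerial-jigsaw-235219 \
  --limit=20
```

This is handy for confirming dry-run output before switching dry_run to false.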
Make sure you have installed the following tools for development and test:
terraform
opa
To install additional tools needed for testing:
make tools
To run the same tests that are run in the Pull Request:
make test