AWS S3 is an object storage service that many industries choose because of the high availability, ease of use, and low price that AWS provides. When it comes to production, however, security takes priority over every other factor. Since a misconfigured S3 bucket is an easy target for attackers on the internet, we will go through the security best practices that we can apply to S3 using Terraform.
Basic Permissions
Let’s assume there’s a role called alice. In order to allow alice to get objects from the S3 bucket profile, we have to set up both a bucket policy and an IAM role policy.
Bucket policy
resource "aws_s3_bucket" "profile" {
bucket = "profile_bucket"
}data "aws_iam_policy_document" "allow_alice" {
statement {
principals {
type = "AWS"
identifiers = [aws_iam_role.alice.arn]
}
actions = [s3:GetBucket]
resources = ["${aws_s3_bucket.profile.arn}/*"]
}
}resource "aws_s3_pucket_policy" "allow_alice" {
bucket = aws_s3_bucket.profile.id
policy = data.aws_iam_policy_document.allow_alice.json
}
Role policy
data "aws_iam_policy_document" "alice_as" {
...
}
resource "aws_iam_role" "alice" {
name = "alice_role"
assume_role_policy = data.aws_iam_policy_document.alice_as.json
}
data "aws_iam_policy_document" "alice" {
statement {
effect = "Allow"
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.profile.arn}/*"]
}
}
resource "aws_iam_policy" "alice" {
name = "allow-alice-s3-get"
policy = aws_iam_policy_document.alice.json
}
resource "aws_iam_role_policy_attachment" "alice" {
role = aws_iam_role.alice.name
policy_arn = aws_iam_policy.alice.arn
}
Make sure you always grant least-privilege access to any identity. If Alice should not be able to delete objects in the profile bucket, don’t grant her the s3:DeleteObject action.
You can also set up a permissions boundary for a role to cap the maximum permissions the role can ever have.
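Here is a minimal sketch of a permissions boundary. The developer_boundary policy below is hypothetical, not part of the setup above; it limits the alice role to read-only S3 access no matter what other policies get attached later:

# Hypothetical boundary policy: even if a broader policy is
# attached to alice later, she can never exceed these permissions.
data "aws_iam_policy_document" "boundary" {
  statement {
    effect    = "Allow"
    actions   = ["s3:GetObject", "s3:ListBucket"]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "developer_boundary" {
  name   = "developer-boundary"
  policy = data.aws_iam_policy_document.boundary.json
}

resource "aws_iam_role" "alice" {
  name                 = "alice_role"
  assume_role_policy   = data.aws_iam_policy_document.alice_as.json
  permissions_boundary = aws_iam_policy.developer_boundary.arn
}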
Encryption of data at rest
Encryption of data at rest prevents attackers from reading your data directly off the disks in the data center, bypassing the S3 APIs. Using the profile bucket as an example, we can set up encryption at rest by adding a server_side_encryption_configuration block:
resource "aws_s3_bucket" "profile" {
bucket = "profile_bucket"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
Here we are using SSE-S3, where S3 manages the encryption keys for you. There are more options that can be configured here.
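For instance, a sketch of using SSE-KMS with a customer-managed key instead; the aws_kms_key resource here is an assumption, not part of the original setup:

# Hypothetical customer-managed key for SSE-KMS.
resource "aws_kms_key" "profile" {
  description = "Key used to encrypt objects in the profile bucket"
}

resource "aws_s3_bucket" "profile" {
  bucket = "profile-bucket"

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        # Use a customer-managed KMS key instead of S3-managed keys.
        sse_algorithm     = "aws:kms"
        kms_master_key_id = aws_kms_key.profile.arn
      }
    }
  }
}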
Enforce encryption of data in transit
Enforcing encryption of data in transit prevents data from being stolen by attackers performing a man-in-the-middle attack. We can simply add this layer of protection by denying any communication that does not use HTTPS (TLS) in the bucket policy:
data "aws_iam_policy_document" "allow_alice" {
statement {
principals {
type = "AWS"
identifiers = [aws_iam_role.alice.arn]
}
actions = [s3:GetBucket]
resources = ["${aws_s3_bucket.profile.arn}/*"]
}
statement {
effort = "Deny"
principals {
type = "*"
identifiers = ["*"]
}
actions = ["s3:*"]
resources = [
"${aws_s3_bucket.profile.arn}/*",
"${aws_s3_bucket.profile.arn}"
]
condition {
test = "Bool"
variable = "aws:SecureTransport"
values = ["false"]
}
}
}
This will deny any attempt to interact with the profile bucket if aws:SecureTransport is false, i.e. the request was not made over HTTPS.
Consider S3 Object Lock
Once we set up Object Lock, we can no longer overwrite or delete any object that has already been written to the bucket. Be careful: once you have enabled Object Lock, you cannot remove it, and you cannot change this bucket configuration through Terraform after the bucket is created. Thus I suggest adding this lock only when you are sure you have everything settled.
resource "aws_s3_bucket" "profile" {
...
object_lock_configuration {
object_lock_enabled = "Enabled"
}
...
}
I have to say I seldom use this. It’s probably only useful for logs, or for write-once-read-many (WORM) objects that serve as a source of truth.
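If you do enable it, you can also set a default retention period for new objects. Here is a sketch, using a hypothetical profile-logs bucket and an arbitrary 30-day GOVERNANCE retention:

resource "aws_s3_bucket" "logs" {
  bucket = "profile-logs" # hypothetical log bucket

  # Object Lock requires versioning to be enabled.
  versioning {
    enabled = true
  }

  object_lock_configuration {
    object_lock_enabled = "Enabled"

    rule {
      default_retention {
        # GOVERNANCE can be bypassed with special permissions;
        # COMPLIANCE cannot be bypassed by anyone, including root.
        mode = "GOVERNANCE"
        days = 30
      }
    }
  }
}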
Enable versioning
This one is straightforward. Versioning enables you to view a file’s history when you have to overwrite the same key multiple times. It also works with lifecycle rules, so you know when an object is moved out of the STANDARD storage class.
resource "aws_s3_bucket" "profile" {
bucket = "profile_bucket"
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
versioning {
enabled = true
}
}
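As an example of pairing versioning with a lifecycle rule, here is a sketch that moves noncurrent versions out of the STANDARD class after 30 days and expires them after a year; both numbers are arbitrary:

resource "aws_s3_bucket" "profile" {
  bucket = "profile-bucket"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "archive-old-versions"
    enabled = true

    # Move superseded versions to cheaper storage...
    noncurrent_version_transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    # ...and delete them after a year.
    noncurrent_version_expiration {
      days = 365
    }
  }
}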
Cross-region replication
By default, S3 stores your data within one region across multiple Availability Zones. But if you are worried about all the data centers in that region going down at the same time, you can set up cross-region replication yourself.
resource "aws_s3_bucket" "profile_backup" {
bucket = "profile-backup"
versioning_configuration {
status = "Enabled"
}
}resource "aws_s3_bucket_replication_configuration" "profile_backup" {
...
role = aws_iam_role.replication.arn
bucket = aws_s3_bucket.profile.id
rule {
status = "Enabled"
destination {
bucket = aws_s3_bucket.profile_backup.arn
storage_class = "STANDARD"
}
}
}
Note that we also need to create a separate IAM role that S3 assumes in order to replicate objects.
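Here is a minimal sketch of that role, granting the permissions S3 replication needs: read on the source bucket and replicate on the destination bucket, with a trust policy that lets the S3 service assume the role.

# Trust policy: let the S3 service assume this role.
data "aws_iam_policy_document" "replication_assume" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["s3.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "replication" {
  name               = "s3-replication-role"
  assume_role_policy = data.aws_iam_policy_document.replication_assume.json
}

data "aws_iam_policy_document" "replication" {
  # Read the replication configuration of the source bucket.
  statement {
    effect    = "Allow"
    actions   = ["s3:GetReplicationConfiguration", "s3:ListBucket"]
    resources = [aws_s3_bucket.profile.arn]
  }

  # Read object versions from the source bucket.
  statement {
    effect = "Allow"
    actions = [
      "s3:GetObjectVersionForReplication",
      "s3:GetObjectVersionAcl",
      "s3:GetObjectVersionTagging",
    ]
    resources = ["${aws_s3_bucket.profile.arn}/*"]
  }

  # Write replicas into the destination bucket.
  statement {
    effect = "Allow"
    actions = [
      "s3:ReplicateObject",
      "s3:ReplicateDelete",
      "s3:ReplicateTags",
    ]
    resources = ["${aws_s3_bucket.profile_backup.arn}/*"]
  }
}

resource "aws_iam_role_policy" "replication" {
  name   = "s3-replication-policy"
  role   = aws_iam_role.replication.id
  policy = data.aws_iam_policy_document.replication.json
}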
VPC endpoints for Amazon S3 access
You can create a VPC endpoint for S3 if you don’t want your traffic routed over the public internet to fetch S3 objects. You can then add a bucket policy statement that denies S3 access from anywhere outside the VPC endpoint you specify:
data "aws_iam_policy_document" "allow_alice" {
...
statement {
effort = "Deny"
principals {
type = "*"
identifiers = ["*"]
}
actions = ["s3:*"]
resources = [
"${aws_s3_bucket.profile.arn}/*",
"${aws_s3_bucket.profile.arn}"
]
condition {
test = "StringNotEquals"
variable = "aws:SourceVpce"
values = ["vpce-1a2b3c4d"]
}
}
...
}
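For completeness, here is a sketch of the gateway endpoint itself; the aws_vpc.main and aws_route_table.private references are assumptions, so substitute your own VPC and route tables:

resource "aws_vpc_endpoint" "s3" {
  vpc_id            = aws_vpc.main.id              # hypothetical VPC
  service_name      = "com.amazonaws.us-east-1.s3" # match your region
  vpc_endpoint_type = "Gateway"

  # Routes to S3 are added to these route tables.
  route_table_ids = [aws_route_table.private.id] # hypothetical route table
}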
Consider whether your application needs internet access before setting this up. If your application needs to connect to the outside world, you won’t set up a VPC endpoint, so you won’t need this layer of protection.
Conclusion
Security has become more and more important these days, with so much confidential data stored in the cloud. Hopefully this article helps you configure secure storage for your application against hackers from all over the world!