Whether you’re already one of Amazon Web Services’ (AWS) one million active customers, or looking to migrate some (or all) of your enterprise’s workloads into the cloud, you’ve more than likely encountered questions and concerns around the security of the platform.
In fact, security concerns are far and away the biggest inhibitor to cloud adoption. The truth is that AWS is inherently no more or less safe than your on-premises environment: security comes down to the steps you take to safeguard your applications and data, and there are definite steps you can take to minimize risk.
Here’s a rundown of the top five AWS security best practices.
1. Review AWS Security Basics
A good place to start for Amazon Web Services best practices? The company’s own whitepaper advice, which details a number of general tips to help you get the most from AWS while protecting essential resources. Think of these as the foundation for long-term cloud security: they apply across corporate networks and cloud platforms alike, and they’re essential to managing compliant AWS hosting.
The Amazon guidance recommends:
- Changing all vendor-supplied defaults before creating new Amazon Machine Images (AMI) or deploying new applications. This means creating new hardware passwords, adjusting simple network management protocol (SNMP) community strings, and updating basic security configuration.
- Disabling all unnecessary or redundant user accounts.
- Implementing a “single primary function” model for each Amazon Elastic Compute Cloud (EC2) instance. For example, this means keeping web servers, databases, and DNS on separate instances to ensure maximum security.
- Disabling all unnecessary functions such as scripts, drivers, features, and subsystems.
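One quick way to act on the “disable redundant accounts” advice is AWS’s IAM credential report, which lists last-used timestamps for every user’s password and access keys. A minimal sketch from the CLI (this requires valid AWS credentials and IAM permissions):

```shell
# Ask AWS to generate the IAM credential report (may take a few seconds)
aws iam generate-credential-report

# Download it; the Content field is a base64-encoded CSV of per-user activity
aws iam get-credential-report --query 'Content' --output text | base64 --decode
```

Users whose passwords and keys haven’t been used in months are good candidates for removal.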
2. Handle Root Concerns
Next up? Dealing with root concerns. A recent CIS Foundations benchmark makes the case that it’s preferable to avoid using root account permissions wherever possible, since the root account has unrestricted access to all resources in your AWS account. By adopting the “principle of least privilege” and reducing the total number of permissions, you can significantly increase total security.
Here’s your best bet: Opt for real-time monitoring of API calls using CloudTrail logs, in addition to establishing both metric filters and alarms for any root login attempts. Setting up the metric filter and notifications takes a few CLI steps:
Create a filter that checks for root account usage:
```shell
aws logs put-metric-filter \
  --log-group-name <cloudtrail_log_group_name> \
  --filter-name <root_usage_metric> \
  --metric-transformations metricName=<root_usage_metric>,metricNamespace='CISBenchmark',metricValue=1 \
  --filter-pattern '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }'
```
Create an SNS topic for the alarm:
```shell
aws sns create-topic --name <sns_topic_name>
```
Create an SNS subscription for this topic:
```shell
aws sns subscribe \
  --topic-arn <sns_topic_arn> \
  --protocol <protocol_for_sns> \
  --notification-endpoint <sns_subscription_endpoints>
```
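The filter and topic by themselves don’t send anything; a CloudWatch alarm on the new metric ties them together and publishes to the SNS topic. A sketch, with illustrative threshold and period values:

```shell
aws cloudwatch put-metric-alarm \
  --alarm-name <root_usage_alarm> \
  --metric-name <root_usage_metric> \
  --namespace 'CISBenchmark' \
  --statistic Sum \
  --period 300 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions <sns_topic_arn>
```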
The result? You get a notification any time root access is requested, letting you track down rogue accounts or identify potential attacks.
3. Always Go Multi-Factor
Of all the AWS security best practices, this one is easy to implement and often overlooked. Always use multi-factor authentication (MFA), also known as two-factor authentication. Amazon recommends enabling MFA for any account that has a console password, since users will be compelled to produce their username, password, and a time-sensitive MFA code.
Start by determining which accounts already have MFA enabled. Then, open your Identity and Access Management (IAM) console. In the left pane select “users.” Ensure that all users with a checkmark in the “password” column also have a checkmark in the “MFA Device” column. If you need to implement a new MFA device, select the username, then the Security Credentials tab, and then Manage MFA Device. Using the MFA Device Wizard you can select a virtual MFA device. If required, you can also implement forced IAM self-service remediation that compels users to complete MFA setup before they can access full permissions on AWS accounts.
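To audit MFA coverage in bulk rather than eyeballing the console, the same IAM credential report can be filtered programmatically; among its columns are `user`, `password_enabled`, and `mfa_active`. A minimal sketch in Python, run here against a made-up sample report rather than a live download:

```python
import csv
import io

def users_missing_mfa(report_csv: str) -> list:
    """Return console users (password enabled) that lack an MFA device."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [
        row["user"]
        for row in reader
        if row["password_enabled"] == "true" and row["mfa_active"] == "false"
    ]

# Sample rows in the credential-report format (values are illustrative)
sample = """user,password_enabled,mfa_active
alice,true,true
bob,true,false
ci-bot,false,false
"""

print(users_missing_mfa(sample))  # bob has a console password but no MFA
```

Service accounts without console passwords (like `ci-bot` above) are excluded, since MFA applies to console sign-in.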
4. Establish Secure VPCs
Beyond root concerns and user authentication, it’s also critical to create protected virtual infrastructure, which means securing all virtual private clouds (VPCs). Doing so isn’t terribly difficult. DZone recommends moving beyond the default VPC by splitting each availability zone into a public and a private subnet: the public subnet faces the Internet through an internet gateway, while NAT gateways let instances in the private subnet reach the Internet without being directly reachable from it. While it’s possible to manage your own NAT instances, AWS’s managed NAT gateways handle scaling and patching automatically and assign an Elastic IP to each gateway; you then point each private subnet’s route table at its gateway.
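Wiring a managed NAT gateway into a private subnet’s route table looks roughly like this from the CLI (the subnet, allocation, and route-table IDs below are placeholders):

```shell
# Create a NAT gateway in a public subnet, using a pre-allocated Elastic IP
aws ec2 create-nat-gateway \
  --subnet-id <public_subnet_id> \
  --allocation-id <elastic_ip_allocation_id>

# Send the private subnet's Internet-bound traffic through the gateway
aws ec2 create-route \
  --route-table-id <private_route_table_id> \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id <nat_gateway_id>
```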
5. Keys, Buckets and Permissions
Last but not least: Tackle some of the most common security missteps companies make when shifting to AWS. This includes encrypting your Amazon Relational Database Service (RDS) instances if they’re not already encrypted at the storage level; AWS provides RDS encryption to ensure data at rest isn’t at risk. In many cases, this also helps fulfill corporate compliance requirements such as those mandated by HIPAA or PCI DSS.
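Note that storage-level encryption has to be chosen when the instance is created; an existing unencrypted instance can’t be encrypted in place and must instead be snapshotted and restored with encryption enabled. Creating an encrypted instance from the CLI looks roughly like this (identifiers, engine, and sizes are placeholders):

```shell
aws rds create-db-instance \
  --db-instance-identifier <db_instance_id> \
  --db-instance-class db.t3.medium \
  --engine mysql \
  --allocated-storage 20 \
  --master-username <admin_user> \
  --master-user-password <admin_password> \
  --storage-encrypted
```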
It’s also a good idea to rotate IAM access keys every three months to ensure old keys aren’t being used to access high-level services. Finally, prefer explicit, written bucket policies over broad S3 bucket permissions; the List permission, for example, can cause cost spikes if users who don’t need it are listing objects at high frequency.
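The 90-day rotation window can be audited with a simple age check. A sketch over hypothetical key metadata shaped like the output of `aws iam list-access-keys` (`AccessKeyId` and `CreateDate` fields; the IDs and dates below are illustrative):

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)

def stale_keys(keys, now=None):
    """Return IDs of access keys older than the rotation window."""
    now = now or datetime.now(timezone.utc)
    return [k["AccessKeyId"] for k in keys if now - k["CreateDate"] > MAX_KEY_AGE]

# Illustrative metadata; in practice this comes from `aws iam list-access-keys`
now = datetime(2019, 1, 15, tzinfo=timezone.utc)
keys = [
    {"AccessKeyId": "AKIAEXAMPLEFRESH", "CreateDate": datetime(2018, 12, 1, tzinfo=timezone.utc)},
    {"AccessKeyId": "AKIAEXAMPLESTALE", "CreateDate": datetime(2018, 9, 1, tzinfo=timezone.utc)},
]
print(stale_keys(keys, now))  # only the September key exceeds 90 days
```

Any key this flags should be replaced with a fresh one and then deactivated rather than deleted outright, so you can roll back if something still depends on it.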
Ready to make the most of your cloud deployment? Start with these AWS security best practices to ensure you’re laying secure groundwork: reviewing the basics, dealing with root issues, opting for strong authentication and secure VPCs, and avoiding the most common security slip-ups.
Updated: January 2019