The Many Ways to Access ECS

By
Michael Levan
January 18, 2023

Common wisdom is that you should assume AWS services are not secure by default.

Why?

Because, although the services themselves may be secure, it’s the user's responsibility to ensure traffic is secure. This is covered in depth in AWS' Shared Responsibility Model. A lot of the time, unless the specific service or platform is built for security, it can’t be expected to be secure out of the box because it’s simply out of scope. It’s up to the engineers and the organization as a whole to ensure that services and the overall infrastructure are secure. Otherwise, there wouldn't be much for security engineers, or other engineers with security responsibilities, to do.

In this blog post, you’ll learn about a few different methods within Elastic Container Service (ECS) that aren’t secure by default and how to make them more secure.

The ECS Configuration

Before getting started, you’ll first have to set up an ECS configuration. Luckily, the configuration is pretty small in comparison to, for example, setting up and configuring an Elastic Kubernetes Service (EKS) cluster.

You’ll need to define configuration for:

  1. The Terraform provider
  2. The ECS cluster itself

First, ensure that you use the proper Terraform provider for AWS.
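A minimal provider configuration might look like the following sketch. The version constraint and region are assumptions; use whichever provider version and region fit your environment.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0" # assumed version constraint
    }
  }
}

# The region is a placeholder; set it to the region you deploy to.
provider "aws" {
  region = "us-east-1"
}
```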

Next, with a couple of lines of code, you can deploy an ECS cluster with CloudWatch’s Container Insights enabled automatically for monitoring and observability of your ECS cluster.
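A sketch of that cluster resource is below. The resource and cluster names are assumptions; the `containerInsights` setting is what turns on CloudWatch Container Insights.

```hcl
# Minimal ECS cluster with Container Insights enabled for
# monitoring and observability.
resource "aws_ecs_cluster" "this" {
  name = "ecs-security-demo" # assumed cluster name

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}
```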

Save the configuration in a main.tf file so you can deploy the ECS cluster. To deploy the ECS cluster, first initialize the Terraform configuration.

You should see an output similar to the output below.

You may now begin working with Terraform. Try running terraform plan to see any changes that are required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform, rerun this command to reinitialize your working directory. If you forget, other commands will detect it and remind you to do so if necessary.

Next, run the terraform plan command to view what will be created and to ensure that there are no configuration bugs in the code and that you’re following best practices and mandatory parameters.

Last, apply the configuration to deploy the ECS cluster.

Once the deployment is complete, you’ll see an output similar to the one below.

Configuring the ECS service (as in, the AWS service itself) is light in comparison to other configurations, which is good, but the bulk of the “what could go wrong from a security perspective” comes after the service is deployed in your AWS environment.

How Is It Insecure?

It’s actually not…

“Isn’t that the entire purpose of this blog post?” you might ask.

Well, yes. Let’s break down what’s meant by the fact that it’s not actually insecure. ECS by itself is simply an AWS service that runs without any other services by default. For example, there are no EC2 instances connected to it by default and there are no applications deployed to it by default. Essentially, ECS is a shell with not a whole lot going on.

The biggest “security issue” out of the box is the IAM configuration (which you’ll see in a section coming up) which has a default configuration for the specific scope of who can see the ECS cluster.

ECS running with the Terraform code in the previous section, as explained, is a shell for you to run other resources. It’s like a house that just went up without a bathroom or sheetrock or lights. It’s there, but it’s not doing all that much.

It’s when you start to deploy resources that you need to start thinking about security from an ECS perspective. Let’s dive into what that means in the next section.

ECS Service

If you’re deploying an ECS cluster, chances are you're planning to deploy a containerized application. When you want to deploy a containerized application, you’ll have to use an ECS task definition, sometimes referred to as a service (not to be confused with the actual ECS service itself). Let’s configure an ECS task definition.

First, create the task definition itself. This will contain all of the container's specs. If you’re used to deploying Pods with Kubernetes, or containers with Docker Compose, you’re going to notice a lot of similarities. Essentially, you’re configuring how the container should look and be interacted with.

Next, configure the service. The service is based on the task definition (which is why the term task definition and service are sometimes used interchangeably) and specifies how many containers of the task definition you want running (think replica set in Kubernetes).

Altogether, the configuration should look like the below.
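The following is a minimal sketch of the two resources together. The container image (nginx), ports, CPU/memory sizes, launch type, and the subnet and security group IDs are all assumptions for illustration, and it assumes the cluster resource is named `aws_ecs_cluster.this`.

```hcl
# Hypothetical task definition: the container's spec, similar to a
# Pod spec in Kubernetes or a service in Docker Compose.
resource "aws_ecs_task_definition" "app" {
  family                   = "app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512

  container_definitions = jsonencode([
    {
      name      = "app"
      image     = "nginx:latest" # assumed image
      essential = true
      portMappings = [
        {
          containerPort = 80
          hostPort      = 80
        }
      ]
    }
  ])
}

# The service runs the desired number of copies of the task
# definition (think replica set in Kubernetes).
resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = aws_ecs_cluster.this.id # assumed cluster resource name
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE" # assumed; EC2 is the other option

  network_configuration {
    subnets         = ["subnet-0123456789abcdef0"] # placeholder
    security_groups = ["sg-0123456789abcdef0"]     # placeholder
  }
}
```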

Make sure to run terraform init and terraform plan, just like in the previous section when you created the ECS cluster.

Next, deploy the application with terraform apply --auto-approve.

Now, here’s when security holes can start to creep into ECS. There are a few questions you should ask yourself:

  • Where is the containerized application coming from?
  • Does it have any holes in terms of accessing the application? For example, are all ports open?
  • How can users interact with the application?

Although this section is mostly about deploying a containerized application, the same rules will apply to any containerized application that’s deployed anywhere. If you have a security hole in your code, whether that’s the application code or the Terraform code deploying the containerized application, it’ll be a small barrier to entry for attackers.

Capacity Providers and Container Instances

Before a containerized application can be deployed, you need to deploy container instances or use a Fargate profile. For the purposes of this section, you’ll learn how to use EC2 container instances, which are EC2 instances running in your AWS environment and registered to your ECS cluster.

If you go to the ECS cluster that you created, under Infrastructure, you’ll see that there are zero container instances.

Regardless of which option you choose (registering instances directly or through an Auto Scaling group capacity provider), the outcome is the same: EC2 instances running containerized workloads for your ECS cluster.
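The Auto Scaling group route can be sketched with a capacity provider like the one below. This assumes an Auto Scaling group of container instances already exists as `aws_autoscaling_group.ecs` and that the cluster resource is named `aws_ecs_cluster.this`; both names are placeholders.

```hcl
# Hypothetical capacity provider backed by an existing Auto Scaling
# group of container instances.
resource "aws_ecs_capacity_provider" "ec2" {
  name = "ec2-capacity"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.ecs.arn # assumed ASG

    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100
    }
  }
}

# Attach the capacity provider to the cluster.
resource "aws_ecs_cluster_capacity_providers" "this" {
  cluster_name       = aws_ecs_cluster.this.name # assumed cluster resource
  capacity_providers = [aws_ecs_capacity_provider.ec2.name]
}
```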

From a security perspective, you must ask yourself a very important question: are the EC2 instances secure?

EC2 instances are just like any other virtual machines. They run operating systems, run various workloads, require updates, and carry security risks. If an attacker can breach an EC2 instance that’s running containerized workloads, every application on that instance becomes a target.

To fix this, you should do the following:

  1. Ensure that you follow proper hardening guides for your operating system. CIS Benchmarks are a great place to start.
  2. Ensure that only the proper ports are open for your EC2 instance. Don’t open all ports.
  3. Ensure that however the EC2 instance is accessed (for example, an SSH key) is properly secured.
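The second point can be sketched as a security group that opens only the ports the instances actually need. The VPC ID, CIDR ranges, and ports below are placeholder assumptions.

```hcl
# Security group for container instances that opens only required
# ports instead of everything.
resource "aws_security_group" "container_instances" {
  name   = "ecs-container-instances"
  vpc_id = "vpc-0123456789abcdef0" # placeholder

  # SSH only from a trusted internal range, not 0.0.0.0/0.
  ingress {
    description = "SSH from the trusted network only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"] # placeholder trusted CIDR
  }

  # Only the application's port is exposed publicly.
  ingress {
    description = "Application traffic"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```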

For more on securing EC2, check out Sym's The Many Ways to Access EC2.

Namespaces

Last but not least, there’s a newer ECS feature called namespaces.

If you’re familiar with namespaces in Linux, or namespaces in Kubernetes, it’s the same thing.

The purpose is to segregate and isolate workloads.

Although this is not “security” out of the box, it does allow you to create some isolation and overall guidelines for how applications are running and what other containers those applications are running next to.

Implementing namespaces is a great first step to ensuring that the containerized applications that you’ve deployed are separated in the methods of your choosing.
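As a sketch, a namespace can be created with an AWS Cloud Map HTTP namespace and set as the cluster's default. The namespace name is an assumption, and the `service_connect_defaults` block requires a reasonably recent AWS provider version.

```hcl
# Hypothetical Cloud Map HTTP namespace used to group and isolate
# ECS workloads.
resource "aws_service_discovery_http_namespace" "app" {
  name        = "payments" # assumed namespace name
  description = "Namespace isolating the payments workloads"
}

# Make it the cluster's default namespace.
resource "aws_ecs_cluster" "this" {
  name = "ecs-security-demo" # assumed cluster name

  service_connect_defaults {
    namespace = aws_service_discovery_http_namespace.app.arn
  }
}
```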

More?

If you liked this blog post and want to see the various ways of securing ECS, it’s highly recommended to take a look at the overall best practices for ECS security, which you can find in AWS' Security Best Practices Guide.
